Newton method for ℓ0-regularized optimization

Authors

Abstract

As a tractable approach, regularization is frequently adopted in sparse optimization. This gives rise to regularized optimization, which aims to minimize the ℓ0 norm or its continuous surrogates that characterize sparsity. Owing to the discreteness of the ℓ0 norm, in contrast to the continuity of its surrogates, the most challenging model is ℓ0-regularized optimization. There is an impressive body of work on the development of numerical algorithms to overcome this challenge. However, most developed methods only ensure that either a (sub)sequence converges to a stationary point from the deterministic optimization perspective, or that the distance between each iterate and any given reference point is bounded by an error bound in the sense of probability. In this paper, we develop a Newton-type method for ℓ0-regularized optimization and prove that the generated sequence converges to a stationary point globally and quadratically under standard assumptions, theoretically explaining why our method can perform surprisingly well.
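To make the abstract's setting concrete, the sketch below illustrates the general idea of a Newton-type scheme for the ℓ0-regularized least-squares model min 0.5·||Ax − b||² + λ||x||₀: a hard-thresholding (proximal) step selects a candidate support, and a Newton step is taken restricted to that support. This is a hedged illustration of the technique class, not the paper's exact algorithm; the function name, the unit step size, and the threshold rule √(2λ) are assumptions for this sketch.

```python
import numpy as np

def l0_newton_sketch(A, b, lam, x0=None, max_iter=50, tol=1e-10):
    """Illustrative Newton-type scheme for
        min 0.5*||A x - b||^2 + lam*||x||_0.
    NOT the cited paper's method: support selection and step rules
    are simplified assumptions for demonstration."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)
        u = x - grad                   # gradient step with unit step size
        # Prox of lam*||.||_0 keeps entries with |u_i| > sqrt(2*lam)
        support = np.flatnonzero(np.abs(u) > np.sqrt(2 * lam))
        x_new = np.zeros(n)
        if support.size:
            As = A[:, support]
            # Newton step restricted to the support; for least squares
            # this solves the restricted normal equations exactly.
            x_new[support] = np.linalg.solve(As.T @ As, As.T @ b)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x
```

On well-conditioned sparse-recovery instances, the support typically stabilizes after a few iterations, after which the restricted Newton step is exact and the iteration stops.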


Similar articles

Regularized Newton method for unconstrained convex optimization

We introduce the regularized Newton method (rnm) for unconstrained convex optimization. For any convex function with a bounded optimal set, the rnm generates a sequence that converges to the optimal set from any starting point. Moreover, the rnm requires neither strong convexity nor smoothness properties in the entire space. If the function is strongly convex and smooth enough in the neighborho...
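The core update behind regularized Newton methods like the one above can be sketched in a few lines: perturb the Hessian by μI so the linear system is always solvable, and tie μ to the gradient norm so the perturbation vanishes near a solution. This is a minimal generic illustration, not the rnm of the cited paper; the rule μ = c·||∇f(x)|| and the absence of a line search are assumptions.

```python
import numpy as np

def regularized_newton(grad, hess, x0, c=1.0, max_iter=100, tol=1e-8):
    """Generic regularized Newton iteration (illustrative sketch):
        x+ = x - (H(x) + mu*I)^{-1} grad(x),  mu = c*||grad(x)||.
    The regularization keeps the system nonsingular for convex f
    and vanishes as the gradient vanishes, permitting fast local
    convergence. Step-size/globalization safeguards are omitted."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        mu = c * gnorm                       # regularization shrinks near a solution
        H = hess(x) + mu * np.eye(x.size)    # always nonsingular when hess(x) is PSD
        x = x - np.linalg.solve(H, g)
    return x
```

For a strongly convex quadratic f(x) = 0.5·xᵀQx − bᵀx, the error contracts by the factor μ/(λ_min(Q)+μ) each step, and since μ is proportional to the gradient norm, the local rate is quadratic.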


A Modified Regularized Newton Method for Unconstrained Nonconvex Optimization

In this paper, we present a modified regularized Newton method for unconstrained nonconvex optimization using a trust-region technique. We show that if the gradient and Hessian of the objective function are Lipschitz continuous, then the modified regularized Newton method (M-RNM) has a global convergence property. Numerical results show that the algorithm is very efficient.


Accelerated Regularized Newton Method for Unconstrained Convex Optimization

We consider a global complexity bound of regularized Newton methods for unconstrained convex optimization. The global complexity bound is an upper bound on the number of iterations required to obtain an approximate solution x such that f(x) − inf f(y) ≤ ε, where ε is a given positive constant. Recently, Ueda and Yamashita proposed the regularized Newton method whose global complexity bound is O...


A Regularized Newton Method without Line Search for Unconstrained Optimization

In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size in order to guarantee global convergence. We demonstrate that it is closely related to the TR-Newton method when the Hessian of the objective function is positive definite. Moreover, it does not solve nonconvex problems but linear equations a...


Truncated regularized Newton method for convex minimizations

Recently, Li et al. (Comput. Optim. Appl. 26:131–147, 2004) proposed a regularized Newton method for convex minimization problems. The method retains the local quadratic convergence property without requiring nonsingularity of the Hessian. In this paper, we develop a truncated regularized Newton method and show its global convergence. We also establish a local quadratic convergence theorem fo...



Journal

Journal title: Numerical Algorithms

Year: 2021

ISSN: 1017-1398, 1572-9265

DOI: https://doi.org/10.1007/s11075-021-01085-x